Classification of Computer Components
Goal of the study
The goal is to build the best possible neural network for classifying fourteen computer components.
Description of the dataset
The dataset comes from Kaggle. The images were collected from Google Images and converted to 256x256 format. It contains 3279 photos of various computer components.
Overview of the dataset classes
The images are divided into fourteen classes covering the following components:
cables, case, CPU, graphics card, hard drive, headset, keyboard, microphone, monitor, motherboard, mouse, RAM, speakers, webcam
Sample images
Splitting the dataset into training, validation, and test parts
I used the following split:
- training - 0.7
- validation - 0.15
- test - 0.15
During the split I assigned the following values to the classes:
- 0 - cables
- 1 - case
- 2 - cpu
- 3 - gpu
- 4 - hdd
- 5 - headset
- 6 - keyboard
- 7 - microphone
- 8 - monitor
- 9 - motherboard
- 10 - mouse
- 11 - ram
- 12 - speakers
- 13 - webcam
| Split | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Training | 208 | 197 | 99 | 109 | 183 | 184 | 187 | 149 | 179 | 168 | 147 | 158 | 207 | 114 |
| Validation | 45 | 42 | 21 | 23 | 39 | 40 | 40 | 32 | 38 | 36 | 31 | 34 | 44 | 25 |
| Test | 45 | 43 | 22 | 24 | 40 | 40 | 41 | 33 | 39 | 37 | 32 | 34 | 45 | 25 |
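The per-class counts in the table can be reproduced by slicing each class's (shuffled) image list at the 70% and 85% marks. A minimal sketch; the class totals are the row sums of the table, while the slicing scheme is my assumption about how the split was produced:

```python
# Split n images of one class into training / validation / test parts.
def split_counts(n):
    train_end = int(n * 0.70)   # images [0 : train_end) -> training
    val_end = int(n * 0.85)     # images [train_end : val_end) -> validation
    return train_end, val_end - train_end, n - val_end  # remainder -> test

# Class totals derived from the table, e.g. cables = 208 + 45 + 45 = 298
totals = {"cables": 298, "cpu": 142, "headset": 264, "webcam": 164}

for name, n in totals.items():
    print(name, split_counts(name and n))
```

For every class this reproduces the table row-for-row, e.g. `split_counts(298)` gives `(208, 45, 45)` for cables.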
The dataset is not balanced: there are far fewer photos of CPUs and webcams than of the other components, while cables and speakers are the most numerous.
Image augmentation
Before modeling I transformed the images as follows:
- rescale = 1/255
- rotation_range = 40
- width_shift_range = 0.2
- height_shift_range = 0.2
- shear_range = 0.2
- zoom_range = 0.2
- horizontal_flip = True
Additionally, I resized the images from 256x256 to 150x150 to speed up training, and set batch_size to 32.
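In Keras these settings correspond to an `ImageDataGenerator` configuration. A sketch of how the training generator could be set up (the directory path is a placeholder):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation applied on the fly to the training images
train_datagen = ImageDataGenerator(
    rescale=1 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)

# Images are loaded at 150x150 in batches of 32, with one-hot labels
train_generator = train_datagen.flow_from_directory(
    "data/train",          # placeholder path
    target_size=(150, 150),
    batch_size=32,
    class_mode="categorical",
)
```

The validation and test generators would typically use only `rescale=1/255`, so that the evaluation images are not distorted.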
Building the neural networks
I chose relu as the activation function, with softmax in the last layer. The models were trained for varying numbers of epochs with 71 steps each. Validation used 15 steps, and evaluation on the test set used 10 steps.
I used categorical cross-entropy as the loss function and adam as the optimizer. Because of the imbalanced classes, I added recall, precision, and AUC to the default metrics.
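Categorical cross-entropy compares the softmax output with the one-hot label: for a single image it is -sum(y_i * log(p_i)), which with one-hot labels reduces to minus the log of the probability assigned to the true class. A minimal sketch:

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Loss for one sample: y_true is a one-hot label,
    y_pred is the softmax output over the classes."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred) if t > 0)

y_true = [0, 1, 0]
# A confident correct prediction gives a small loss: -log(0.8) ~= 0.223
print(categorical_crossentropy(y_true, [0.1, 0.8, 0.1]))
# An unsure prediction gives a larger loss: -log(0.3) ~= 1.204
print(categorical_crossentropy(y_true, [0.4, 0.3, 0.3]))
```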
Models built with dense and convolutional layers
I built the first network with the intention of keeping it fairly simple. It consists of 5 dense layers with 16, 32, 64, 32, and 14 neurons.
Model: "sequential_28"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
dense_157 (Dense) (None, 150, 150, 16) 64
flatten_28 (Flatten) (None, 360000) 0
dense_156 (Dense) (None, 32) 11520032
dense_155 (Dense) (None, 64) 2112
dense_154 (Dense) (None, 32) 2080
dense_153 (Dense) (None, 14) 462
================================================================================
Total params: 11524750 (43.96 MB)
Trainable params: 11524750 (43.96 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________
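The parameter counts in the summary above follow from in*out + out weights and biases per dense layer; the first Dense acts per pixel on the 3 input channels, hence 3*16 + 16 = 64. A quick check against the totals (layer sizes taken from the summary):

```python
def dense_params(n_in, n_out):
    # weights + biases of one fully connected layer
    return n_in * n_out + n_out

params = [
    dense_params(3, 16),               # applied per pixel -> (150, 150, 16)
    dense_params(150 * 150 * 16, 32),  # after flattening to 360000 features
    dense_params(32, 64),
    dense_params(64, 32),
    dense_params(32, 14),
]
print(params)       # [64, 11520032, 2112, 2080, 462]
print(sum(params))  # 11524750, matching the summary total
```

Almost all of the parameters sit in the first layer after flattening, which is why this architecture is so large yet shallow.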
The plot shows that the model fit the data very poorly. It did not improve over time, so I stopped training after 20 epochs.
I added one more dense layer plus dropout layers, and changed the neuron counts of the existing layers to 32, 64, 128, 64, 32, and 14.
Model: "sequential_31"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
dense_175 (Dense) (None, 150, 150, 32) 128
flatten_31 (Flatten) (None, 720000) 0
dense_174 (Dense) (None, 64) 46080064
dense_173 (Dense) (None, 128) 8320
dense_172 (Dense) (None, 64) 8256
dropout_28 (Dropout) (None, 64) 0
dense_171 (Dense) (None, 32) 2080
dense_170 (Dense) (None, 14) 462
================================================================================
Total params: 46099310 (175.85 MB)
Trainable params: 46099310 (175.85 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________
The model improved over the first one but still did not reach an acceptable score. Moreover, after long training it started to overfit. Time to change strategy.
I created a new architecture with 4 convolutional layers (32, 64, 64, 32 filters) using 3x3 kernels, each followed by a 2x2 max-pooling layer. On top of these I added four dense layers (64, 64, 32, 14 neurons) and two dropout layers with rate 0.2.
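The spatial sizes in the summary below can be traced step by step: a 3x3 convolution without padding shrinks each side by 2, and 2x2 max pooling halves it (rounding down). A sketch of the shape arithmetic, assuming these default ('valid' padding) settings:

```python
side = 150   # input images are 150x150
sizes = []
for _ in range(4):      # four conv + max-pool stages
    side = side - 2     # 3x3 convolution, 'valid' padding
    sizes.append(side)
    side = side // 2    # 2x2 max pooling
    sizes.append(side)

print(sizes)             # [148, 74, 72, 36, 34, 17, 15, 7]
print(side * side * 32)  # 1568 features after flattening
```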
Model: "sequential_2"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
conv2d_11 (Conv2D) (None, 148, 148, 32) 896
max_pooling2d_11 (MaxPooling2D) (None, 74, 74, 32) 0
conv2d_10 (Conv2D) (None, 72, 72, 64) 18496
max_pooling2d_10 (MaxPooling2D) (None, 36, 36, 64) 0
conv2d_9 (Conv2D) (None, 34, 34, 64) 36928
max_pooling2d_9 (MaxPooling2D) (None, 17, 17, 64) 0
conv2d_8 (Conv2D) (None, 15, 15, 32) 18464
max_pooling2d_8 (MaxPooling2D) (None, 7, 7, 32) 0
flatten_2 (Flatten) (None, 1568) 0
dense_10 (Dense) (None, 64) 100416
dropout_4 (Dropout) (None, 64) 0
dense_9 (Dense) (None, 64) 4160
dropout_3 (Dropout) (None, 64) 0
dense_8 (Dense) (None, 32) 2080
dense_7 (Dense) (None, 14) 462
================================================================================
Total params: 181902 (710.55 KB)
Trainable params: 181902 (710.55 KB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________
The results improved further.
Next I changed the dense part: I wanted to try increasing the number of neurons in the layer right after flattening.
Model: "sequential_3"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
conv2d_15 (Conv2D) (None, 148, 148, 32) 896
max_pooling2d_15 (MaxPooling2D) (None, 74, 74, 32) 0
conv2d_14 (Conv2D) (None, 72, 72, 64) 18496
max_pooling2d_14 (MaxPooling2D) (None, 36, 36, 64) 0
conv2d_13 (Conv2D) (None, 34, 34, 64) 36928
max_pooling2d_13 (MaxPooling2D) (None, 17, 17, 64) 0
conv2d_12 (Conv2D) (None, 15, 15, 32) 18464
max_pooling2d_12 (MaxPooling2D) (None, 7, 7, 32) 0
flatten_3 (Flatten) (None, 1568) 0
dense_13 (Dense) (None, 128) 200832
dropout_5 (Dropout) (None, 128) 0
dense_12 (Dense) (None, 32) 4128
dense_11 (Dense) (None, 14) 462
================================================================================
Total params: 280206 (1.07 MB)
Trainable params: 280206 (1.07 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________
Unfortunately this did not pay off; the model is no better than the previous one.
Models using pre-trained networks
In this model I used the pre-trained VGG16 network, followed by dense layers with 128, 128, 32, and 14 neurons. The weights of the pre-trained network were frozen so that it would not be retrained.
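With the VGG16 weights frozen, only the dense head trains. Its parameter count can be checked by hand: VGG16 outputs 4x4x512 = 8192 features after flattening, and each dense layer contributes in*out + out parameters:

```python
def dense_params(n_in, n_out):
    # weights + biases of one fully connected layer
    return n_in * n_out + n_out

features = 4 * 4 * 512      # flattened VGG16 output
head = [128, 128, 32, 14]   # dense layers of the classification head

trainable = 0
n_in = features
for n_out in head:
    trainable += dense_params(n_in, n_out)
    n_in = n_out

print(trainable)  # 1069806 trainable parameters, as in the summary below
```

The frozen VGG16 base accounts for the remaining 14714688 non-trainable parameters.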
Model: "vgg16"
________________________________________________________________________________
Layer (type) Output Shape Param #
================================================================================
input_1 (InputLayer) [(None, 150, 150, 3)] 0
block1_conv1 (Conv2D) (None, 150, 150, 64) 1792
block1_conv2 (Conv2D) (None, 150, 150, 64) 36928
block1_pool (MaxPooling2D) (None, 75, 75, 64) 0
block2_conv1 (Conv2D) (None, 75, 75, 128) 73856
block2_conv2 (Conv2D) (None, 75, 75, 128) 147584
block2_pool (MaxPooling2D) (None, 37, 37, 128) 0
block3_conv1 (Conv2D) (None, 37, 37, 256) 295168
block3_conv2 (Conv2D) (None, 37, 37, 256) 590080
block3_conv3 (Conv2D) (None, 37, 37, 256) 590080
block3_pool (MaxPooling2D) (None, 18, 18, 256) 0
block4_conv1 (Conv2D) (None, 18, 18, 512) 1180160
block4_conv2 (Conv2D) (None, 18, 18, 512) 2359808
block4_conv3 (Conv2D) (None, 18, 18, 512) 2359808
block4_pool (MaxPooling2D) (None, 9, 9, 512) 0
block5_conv1 (Conv2D) (None, 9, 9, 512) 2359808
block5_conv2 (Conv2D) (None, 9, 9, 512) 2359808
block5_conv3 (Conv2D) (None, 9, 9, 512) 2359808
block5_pool (MaxPooling2D) (None, 4, 4, 512) 0
================================================================================
Total params: 14714688 (56.13 MB)
Trainable params: 14714688 (56.13 MB)
Non-trainable params: 0 (0.00 Byte)
________________________________________________________________________________
Model: "sequential_10"
________________________________________________________________________________
Layer (type) Output Shape Param # Trainable
================================================================================
vgg16 (Functional) (None, 4, 4, 512) 14714688 N
flatten_10 (Flatten) (None, 8192) 0 Y
dense_46 (Dense) (None, 128) 1048704 Y
dropout_24 (Dropout) (None, 128) 0 Y
dense_45 (Dense) (None, 128) 16512 Y
dropout_23 (Dropout) (None, 128) 0 Y
dense_44 (Dense) (None, 32) 4128 Y
dense_43 (Dense) (None, 14) 462 Y
================================================================================
Total params: 15784494 (60.21 MB)
Trainable params: 1069806 (4.08 MB)
Non-trainable params: 14714688 (56.13 MB)
________________________________________________________________________________
The results are significantly better than those of the previous models. There is no strong overfitting, except on precision.
I then swapped the pre-trained network for MobileNet.
Model: "mobilenet_1.00_224"
________________________________________________________________________________
Layer (type) Output Shape Param # Trainable
================================================================================
input_2 (InputLayer) [(None, 150, 150, 3)] 0 Y
conv1 (Conv2D) (None, 75, 75, 32) 864 Y
conv1_bn (BatchNormalization (None, 75, 75, 32) 128 Y
)
conv1_relu (ReLU) (None, 75, 75, 32) 0 Y
conv_dw_1 (DepthwiseConv2D) (None, 75, 75, 32) 288 Y
conv_dw_1_bn (BatchNormaliza (None, 75, 75, 32) 128 Y
tion)
conv_dw_1_relu (ReLU) (None, 75, 75, 32) 0 Y
conv_pw_1 (Conv2D) (None, 75, 75, 64) 2048 Y
conv_pw_1_bn (BatchNormaliza (None, 75, 75, 64) 256 Y
tion)
conv_pw_1_relu (ReLU) (None, 75, 75, 64) 0 Y
conv_pad_2 (ZeroPadding2D) (None, 76, 76, 64) 0 Y
conv_dw_2 (DepthwiseConv2D) (None, 37, 37, 64) 576 Y
conv_dw_2_bn (BatchNormaliza (None, 37, 37, 64) 256 Y
tion)
conv_dw_2_relu (ReLU) (None, 37, 37, 64) 0 Y
conv_pw_2 (Conv2D) (None, 37, 37, 128) 8192 Y
conv_pw_2_bn (BatchNormaliza (None, 37, 37, 128) 512 Y
tion)
conv_pw_2_relu (ReLU) (None, 37, 37, 128) 0 Y
conv_dw_3 (DepthwiseConv2D) (None, 37, 37, 128) 1152 Y
conv_dw_3_bn (BatchNormaliza (None, 37, 37, 128) 512 Y
tion)
conv_dw_3_relu (ReLU) (None, 37, 37, 128) 0 Y
conv_pw_3 (Conv2D) (None, 37, 37, 128) 16384 Y
conv_pw_3_bn (BatchNormaliza (None, 37, 37, 128) 512 Y
tion)
conv_pw_3_relu (ReLU) (None, 37, 37, 128) 0 Y
conv_pad_4 (ZeroPadding2D) (None, 38, 38, 128) 0 Y
conv_dw_4 (DepthwiseConv2D) (None, 18, 18, 128) 1152 Y
conv_dw_4_bn (BatchNormaliza (None, 18, 18, 128) 512 Y
tion)
conv_dw_4_relu (ReLU) (None, 18, 18, 128) 0 Y
conv_pw_4 (Conv2D) (None, 18, 18, 256) 32768 Y
conv_pw_4_bn (BatchNormaliza (None, 18, 18, 256) 1024 Y
tion)
conv_pw_4_relu (ReLU) (None, 18, 18, 256) 0 Y
conv_dw_5 (DepthwiseConv2D) (None, 18, 18, 256) 2304 Y
conv_dw_5_bn (BatchNormaliza (None, 18, 18, 256) 1024 Y
tion)
conv_dw_5_relu (ReLU) (None, 18, 18, 256) 0 Y
conv_pw_5 (Conv2D) (None, 18, 18, 256) 65536 Y
conv_pw_5_bn (BatchNormaliza (None, 18, 18, 256) 1024 Y
tion)
conv_pw_5_relu (ReLU) (None, 18, 18, 256) 0 Y
conv_pad_6 (ZeroPadding2D) (None, 19, 19, 256) 0 Y
conv_dw_6 (DepthwiseConv2D) (None, 9, 9, 256) 2304 Y
conv_dw_6_bn (BatchNormaliza (None, 9, 9, 256) 1024 Y
tion)
conv_dw_6_relu (ReLU) (None, 9, 9, 256) 0 Y
conv_pw_6 (Conv2D) (None, 9, 9, 512) 131072 Y
conv_pw_6_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_pw_6_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_dw_7 (DepthwiseConv2D) (None, 9, 9, 512) 4608 Y
conv_dw_7_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_dw_7_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pw_7 (Conv2D) (None, 9, 9, 512) 262144 Y
conv_pw_7_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_pw_7_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_dw_8 (DepthwiseConv2D) (None, 9, 9, 512) 4608 Y
conv_dw_8_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_dw_8_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pw_8 (Conv2D) (None, 9, 9, 512) 262144 Y
conv_pw_8_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_pw_8_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_dw_9 (DepthwiseConv2D) (None, 9, 9, 512) 4608 Y
conv_dw_9_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_dw_9_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pw_9 (Conv2D) (None, 9, 9, 512) 262144 Y
conv_pw_9_bn (BatchNormaliza (None, 9, 9, 512) 2048 Y
tion)
conv_pw_9_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_dw_10 (DepthwiseConv2D) (None, 9, 9, 512) 4608 Y
conv_dw_10_bn (BatchNormaliz (None, 9, 9, 512) 2048 Y
ation)
conv_dw_10_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pw_10 (Conv2D) (None, 9, 9, 512) 262144 Y
conv_pw_10_bn (BatchNormaliz (None, 9, 9, 512) 2048 Y
ation)
conv_pw_10_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_dw_11 (DepthwiseConv2D) (None, 9, 9, 512) 4608 Y
conv_dw_11_bn (BatchNormaliz (None, 9, 9, 512) 2048 Y
ation)
conv_dw_11_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pw_11 (Conv2D) (None, 9, 9, 512) 262144 Y
conv_pw_11_bn (BatchNormaliz (None, 9, 9, 512) 2048 Y
ation)
conv_pw_11_relu (ReLU) (None, 9, 9, 512) 0 Y
conv_pad_12 (ZeroPadding2D) (None, 10, 10, 512) 0 Y
conv_dw_12 (DepthwiseConv2D) (None, 4, 4, 512) 4608 Y
conv_dw_12_bn (BatchNormaliz (None, 4, 4, 512) 2048 Y
ation)
conv_dw_12_relu (ReLU) (None, 4, 4, 512) 0 Y
conv_pw_12 (Conv2D) (None, 4, 4, 1024) 524288 Y
conv_pw_12_bn (BatchNormaliz (None, 4, 4, 1024) 4096 Y
ation)
conv_pw_12_relu (ReLU) (None, 4, 4, 1024) 0 Y
conv_dw_13 (DepthwiseConv2D) (None, 4, 4, 1024) 9216 Y
conv_dw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 Y
ation)
conv_dw_13_relu (ReLU) (None, 4, 4, 1024) 0 Y
conv_pw_13 (Conv2D) (None, 4, 4, 1024) 1048576 Y
conv_pw_13_bn (BatchNormaliz (None, 4, 4, 1024) 4096 Y
ation)
conv_pw_13_relu (ReLU) (None, 4, 4, 1024) 0 Y
================================================================================
Total params: 3228864 (12.32 MB)
Trainable params: 3206976 (12.23 MB)
Non-trainable params: 21888 (85.50 KB)
________________________________________________________________________________
Model: "sequential_18"
________________________________________________________________________________
Layer (type) Output Shape Param # Trainable
================================================================================
mobilenet_1.00_224 (Function (None, 4, 4, 1024) 3228864 N
al)
flatten_18 (Flatten) (None, 16384) 0 Y
dense_78 (Dense) (None, 128) 2097280 Y
dropout_40 (Dropout) (None, 128) 0 Y
dense_77 (Dense) (None, 128) 16512 Y
dropout_39 (Dropout) (None, 128) 0 Y
dense_76 (Dense) (None, 32) 4128 Y
dense_75 (Dense) (None, 14) 462 Y
================================================================================
Total params: 5347246 (20.40 MB)
Trainable params: 2118382 (8.08 MB)
Non-trainable params: 3228864 (12.32 MB)
________________________________________________________________________________
The model fit marginally better than with VGG16, and the overfitting on precision disappeared.
I then swapped the pre-trained network for DenseNet.
Model: "densenet121"
________________________________________________________________________________
Layer (type) Output Shape Para Connected to Trainable
m #
================================================================================
input_3 (InputLay [(None, 150, 150, 0 [] Y
er) 3)]
zero_padding2d (Z (None, 156, 156, 3 0 ['input_3[0][0]'] Y
eroPadding2D) )
conv1/conv (Conv2 (None, 75, 75, 64) 9408 ['zero_padding2d[0 Y
D) ][0]']
conv1/bn (BatchNo (None, 75, 75, 64) 256 ['conv1/conv[0][0] Y
rmalization) ']
conv1/relu (Activ (None, 75, 75, 64) 0 ['conv1/bn[0][0]'] Y
ation)
zero_padding2d_1 (None, 77, 77, 64) 0 ['conv1/relu[0][0] Y
(ZeroPadding2D) ']
pool1 (MaxPooling (None, 38, 38, 64) 0 ['zero_padding2d_1 Y
2D) [0][0]']
conv2_block1_0_bn (None, 38, 38, 64) 256 ['pool1[0][0]'] Y
(BatchNormalizat
ion)
conv2_block1_0_re (None, 38, 38, 64) 0 ['conv2_block1_0_b Y
lu (Activation) n[0][0]']
conv2_block1_1_co (None, 38, 38, 128 8192 ['conv2_block1_0_r Y
nv (Conv2D) ) elu[0][0]']
conv2_block1_1_bn (None, 38, 38, 128 512 ['conv2_block1_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block1_1_re (None, 38, 38, 128 0 ['conv2_block1_1_b Y
lu (Activation) ) n[0][0]']
conv2_block1_2_co (None, 38, 38, 32) 3686 ['conv2_block1_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block1_conc (None, 38, 38, 96) 0 ['pool1[0][0]', Y
at (Concatenate) 'conv2_block1_2_c
onv[0][0]']
conv2_block2_0_bn (None, 38, 38, 96) 384 ['conv2_block1_con Y
(BatchNormalizat cat[0][0]']
ion)
conv2_block2_0_re (None, 38, 38, 96) 0 ['conv2_block2_0_b Y
lu (Activation) n[0][0]']
conv2_block2_1_co (None, 38, 38, 128 1228 ['conv2_block2_0_r Y
nv (Conv2D) ) 8 elu[0][0]']
conv2_block2_1_bn (None, 38, 38, 128 512 ['conv2_block2_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block2_1_re (None, 38, 38, 128 0 ['conv2_block2_1_b Y
lu (Activation) ) n[0][0]']
conv2_block2_2_co (None, 38, 38, 32) 3686 ['conv2_block2_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block2_conc (None, 38, 38, 128 0 ['conv2_block1_con Y
at (Concatenate) ) cat[0][0]',
'conv2_block2_2_c
onv[0][0]']
conv2_block3_0_bn (None, 38, 38, 128 512 ['conv2_block2_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv2_block3_0_re (None, 38, 38, 128 0 ['conv2_block3_0_b Y
lu (Activation) ) n[0][0]']
conv2_block3_1_co (None, 38, 38, 128 1638 ['conv2_block3_0_r Y
nv (Conv2D) ) 4 elu[0][0]']
conv2_block3_1_bn (None, 38, 38, 128 512 ['conv2_block3_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block3_1_re (None, 38, 38, 128 0 ['conv2_block3_1_b Y
lu (Activation) ) n[0][0]']
conv2_block3_2_co (None, 38, 38, 32) 3686 ['conv2_block3_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block3_conc (None, 38, 38, 160 0 ['conv2_block2_con Y
at (Concatenate) ) cat[0][0]',
'conv2_block3_2_c
onv[0][0]']
conv2_block4_0_bn (None, 38, 38, 160 640 ['conv2_block3_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv2_block4_0_re (None, 38, 38, 160 0 ['conv2_block4_0_b Y
lu (Activation) ) n[0][0]']
conv2_block4_1_co (None, 38, 38, 128 2048 ['conv2_block4_0_r Y
nv (Conv2D) ) 0 elu[0][0]']
conv2_block4_1_bn (None, 38, 38, 128 512 ['conv2_block4_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block4_1_re (None, 38, 38, 128 0 ['conv2_block4_1_b Y
lu (Activation) ) n[0][0]']
conv2_block4_2_co (None, 38, 38, 32) 3686 ['conv2_block4_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block4_conc (None, 38, 38, 192 0 ['conv2_block3_con Y
at (Concatenate) ) cat[0][0]',
'conv2_block4_2_c
onv[0][0]']
conv2_block5_0_bn (None, 38, 38, 192 768 ['conv2_block4_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv2_block5_0_re (None, 38, 38, 192 0 ['conv2_block5_0_b Y
lu (Activation) ) n[0][0]']
conv2_block5_1_co (None, 38, 38, 128 2457 ['conv2_block5_0_r Y
nv (Conv2D) ) 6 elu[0][0]']
conv2_block5_1_bn (None, 38, 38, 128 512 ['conv2_block5_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block5_1_re (None, 38, 38, 128 0 ['conv2_block5_1_b Y
lu (Activation) ) n[0][0]']
conv2_block5_2_co (None, 38, 38, 32) 3686 ['conv2_block5_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block5_conc (None, 38, 38, 224 0 ['conv2_block4_con Y
at (Concatenate) ) cat[0][0]',
'conv2_block5_2_c
onv[0][0]']
conv2_block6_0_bn (None, 38, 38, 224 896 ['conv2_block5_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv2_block6_0_re (None, 38, 38, 224 0 ['conv2_block6_0_b Y
lu (Activation) ) n[0][0]']
conv2_block6_1_co (None, 38, 38, 128 2867 ['conv2_block6_0_r Y
nv (Conv2D) ) 2 elu[0][0]']
conv2_block6_1_bn (None, 38, 38, 128 512 ['conv2_block6_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv2_block6_1_re (None, 38, 38, 128 0 ['conv2_block6_1_b Y
lu (Activation) ) n[0][0]']
conv2_block6_2_co (None, 38, 38, 32) 3686 ['conv2_block6_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv2_block6_conc (None, 38, 38, 256 0 ['conv2_block5_con Y
at (Concatenate) ) cat[0][0]',
'conv2_block6_2_c
onv[0][0]']
pool2_bn (BatchNo (None, 38, 38, 256 1024 ['conv2_block6_con Y
rmalization) ) cat[0][0]']
pool2_relu (Activ (None, 38, 38, 256 0 ['pool2_bn[0][0]'] Y
ation) )
pool2_conv (Conv2 (None, 38, 38, 128 3276 ['pool2_relu[0][0] Y
D) ) 8 ']
pool2_pool (Avera (None, 19, 19, 128 0 ['pool2_conv[0][0] Y
gePooling2D) ) ']
conv3_block1_0_bn (None, 19, 19, 128 512 ['pool2_pool[0][0] Y
(BatchNormalizat ) ']
ion)
conv3_block1_0_re (None, 19, 19, 128 0 ['conv3_block1_0_b Y
lu (Activation) ) n[0][0]']
conv3_block1_1_co (None, 19, 19, 128 1638 ['conv3_block1_0_r Y
nv (Conv2D) ) 4 elu[0][0]']
conv3_block1_1_bn (None, 19, 19, 128 512 ['conv3_block1_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block1_1_re (None, 19, 19, 128 0 ['conv3_block1_1_b Y
lu (Activation) ) n[0][0]']
conv3_block1_2_co (None, 19, 19, 32) 3686 ['conv3_block1_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block1_conc (None, 19, 19, 160 0 ['pool2_pool[0][0] Y
at (Concatenate) ) ',
'conv3_block1_2_c
onv[0][0]']
conv3_block2_0_bn (None, 19, 19, 160 640 ['conv3_block1_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block2_0_re (None, 19, 19, 160 0 ['conv3_block2_0_b Y
lu (Activation) ) n[0][0]']
conv3_block2_1_co (None, 19, 19, 128 2048 ['conv3_block2_0_r Y
nv (Conv2D) ) 0 elu[0][0]']
conv3_block2_1_bn (None, 19, 19, 128 512 ['conv3_block2_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block2_1_re (None, 19, 19, 128 0 ['conv3_block2_1_b Y
lu (Activation) ) n[0][0]']
conv3_block2_2_co (None, 19, 19, 32) 3686 ['conv3_block2_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block2_conc (None, 19, 19, 192 0 ['conv3_block1_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block2_2_c
onv[0][0]']
conv3_block3_0_bn (None, 19, 19, 192 768 ['conv3_block2_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block3_0_re (None, 19, 19, 192 0 ['conv3_block3_0_b Y
lu (Activation) ) n[0][0]']
conv3_block3_1_co (None, 19, 19, 128 2457 ['conv3_block3_0_r Y
nv (Conv2D) ) 6 elu[0][0]']
conv3_block3_1_bn (None, 19, 19, 128 512 ['conv3_block3_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block3_1_re (None, 19, 19, 128 0 ['conv3_block3_1_b Y
lu (Activation) ) n[0][0]']
conv3_block3_2_co (None, 19, 19, 32) 3686 ['conv3_block3_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block3_conc (None, 19, 19, 224 0 ['conv3_block2_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block3_2_c
onv[0][0]']
conv3_block4_0_bn (None, 19, 19, 224 896 ['conv3_block3_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block4_0_re (None, 19, 19, 224 0 ['conv3_block4_0_b Y
lu (Activation) ) n[0][0]']
conv3_block4_1_co (None, 19, 19, 128 2867 ['conv3_block4_0_r Y
nv (Conv2D) ) 2 elu[0][0]']
conv3_block4_1_bn (None, 19, 19, 128 512 ['conv3_block4_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block4_1_re (None, 19, 19, 128 0 ['conv3_block4_1_b Y
lu (Activation) ) n[0][0]']
conv3_block4_2_co (None, 19, 19, 32) 3686 ['conv3_block4_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block4_conc (None, 19, 19, 256 0 ['conv3_block3_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block4_2_c
onv[0][0]']
conv3_block5_0_bn (None, 19, 19, 256 1024 ['conv3_block4_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block5_0_re (None, 19, 19, 256 0 ['conv3_block5_0_b Y
lu (Activation) ) n[0][0]']
conv3_block5_1_co (None, 19, 19, 128 3276 ['conv3_block5_0_r Y
nv (Conv2D) ) 8 elu[0][0]']
conv3_block5_1_bn (None, 19, 19, 128 512 ['conv3_block5_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block5_1_re (None, 19, 19, 128 0 ['conv3_block5_1_b Y
lu (Activation) ) n[0][0]']
conv3_block5_2_co (None, 19, 19, 32) 3686 ['conv3_block5_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block5_conc (None, 19, 19, 288 0 ['conv3_block4_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block5_2_c
onv[0][0]']
conv3_block6_0_bn (None, 19, 19, 288 1152 ['conv3_block5_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block6_0_re (None, 19, 19, 288 0 ['conv3_block6_0_b Y
lu (Activation) ) n[0][0]']
conv3_block6_1_co (None, 19, 19, 128 3686 ['conv3_block6_0_r Y
nv (Conv2D) ) 4 elu[0][0]']
conv3_block6_1_bn (None, 19, 19, 128 512 ['conv3_block6_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block6_1_re (None, 19, 19, 128 0 ['conv3_block6_1_b Y
lu (Activation) ) n[0][0]']
conv3_block6_2_co (None, 19, 19, 32) 3686 ['conv3_block6_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block6_conc (None, 19, 19, 320 0 ['conv3_block5_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block6_2_c
onv[0][0]']
conv3_block7_0_bn (None, 19, 19, 320 1280 ['conv3_block6_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block7_0_re (None, 19, 19, 320 0 ['conv3_block7_0_b Y
lu (Activation) ) n[0][0]']
conv3_block7_1_co (None, 19, 19, 128 4096 ['conv3_block7_0_r Y
nv (Conv2D) ) 0 elu[0][0]']
conv3_block7_1_bn (None, 19, 19, 128 512 ['conv3_block7_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block7_1_re (None, 19, 19, 128 0 ['conv3_block7_1_b Y
lu (Activation) ) n[0][0]']
conv3_block7_2_co (None, 19, 19, 32) 3686 ['conv3_block7_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block7_conc (None, 19, 19, 352 0 ['conv3_block6_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block7_2_c
onv[0][0]']
conv3_block8_0_bn (None, 19, 19, 352 1408 ['conv3_block7_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block8_0_re (None, 19, 19, 352 0 ['conv3_block8_0_b Y
lu (Activation) ) n[0][0]']
conv3_block8_1_co (None, 19, 19, 128 4505 ['conv3_block8_0_r Y
nv (Conv2D) ) 6 elu[0][0]']
conv3_block8_1_bn (None, 19, 19, 128 512 ['conv3_block8_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block8_1_re (None, 19, 19, 128 0 ['conv3_block8_1_b Y
lu (Activation) ) n[0][0]']
conv3_block8_2_co (None, 19, 19, 32) 3686 ['conv3_block8_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block8_conc (None, 19, 19, 384 0 ['conv3_block7_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block8_2_c
onv[0][0]']
conv3_block9_0_bn (None, 19, 19, 384 1536 ['conv3_block8_con Y
(BatchNormalizat ) cat[0][0]']
ion)
conv3_block9_0_re (None, 19, 19, 384 0 ['conv3_block9_0_b Y
lu (Activation) ) n[0][0]']
conv3_block9_1_co (None, 19, 19, 128 4915 ['conv3_block9_0_r Y
nv (Conv2D) ) 2 elu[0][0]']
conv3_block9_1_bn (None, 19, 19, 128 512 ['conv3_block9_1_c Y
(BatchNormalizat ) onv[0][0]']
ion)
conv3_block9_1_re (None, 19, 19, 128 0 ['conv3_block9_1_b Y
lu (Activation) ) n[0][0]']
conv3_block9_2_co (None, 19, 19, 32) 3686 ['conv3_block9_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv3_block9_conc (None, 19, 19, 416 0 ['conv3_block8_con Y
at (Concatenate) ) cat[0][0]',
'conv3_block9_2_c
onv[0][0]']
conv3_block10_0_b (None, 19, 19, 416 1664 ['conv3_block9_con Y
n (BatchNormaliza ) cat[0][0]']
tion)
conv3_block10_0_r (None, 19, 19, 416 0 ['conv3_block10_0_ Y
elu (Activation) ) bn[0][0]']
conv3_block10_1_c (None, 19, 19, 128 5324 ['conv3_block10_0_ Y
onv (Conv2D) ) 8 relu[0][0]']
conv3_block10_1_b (None, 19, 19, 128 512 ['conv3_block10_1_ Y
n (BatchNormaliza ) conv[0][0]']
tion)
conv3_block10_1_r (None, 19, 19, 128 0 ['conv3_block10_1_ Y
elu (Activation) ) bn[0][0]']
conv3_block10_2_c (None, 19, 19, 32) 3686 ['conv3_block10_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv3_block10_con (None, 19, 19, 448 0 ['conv3_block9_con Y
cat (Concatenate) ) cat[0][0]',
'conv3_block10_2_
conv[0][0]']
conv3_block11_0_b (None, 19, 19, 448 1792 ['conv3_block10_co Y
n (BatchNormaliza ) ncat[0][0]']
tion)
conv3_block11_0_r (None, 19, 19, 448 0 ['conv3_block11_0_ Y
elu (Activation) ) bn[0][0]']
conv3_block11_1_c (None, 19, 19, 128 5734 ['conv3_block11_0_ Y
onv (Conv2D) ) 4 relu[0][0]']
conv3_block11_1_b (None, 19, 19, 128 512 ['conv3_block11_1_ Y
n (BatchNormaliza ) conv[0][0]']
tion)
conv3_block11_1_r (None, 19, 19, 128 0 ['conv3_block11_1_ Y
elu (Activation) ) bn[0][0]']
conv3_block11_2_c (None, 19, 19, 32) 3686 ['conv3_block11_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv3_block11_con (None, 19, 19, 480 0 ['conv3_block10_co Y
cat (Concatenate) ) ncat[0][0]',
'conv3_block11_2_
conv[0][0]']
conv3_block12_0_b (None, 19, 19, 480 1920 ['conv3_block11_co Y
n (BatchNormaliza ) ncat[0][0]']
tion)
conv3_block12_0_r (None, 19, 19, 480 0 ['conv3_block12_0_ Y
elu (Activation) ) bn[0][0]']
conv3_block12_1_c (None, 19, 19, 128 6144 ['conv3_block12_0_ Y
onv (Conv2D) ) 0 relu[0][0]']
conv3_block12_1_b (None, 19, 19, 128 512 ['conv3_block12_1_ Y
n (BatchNormaliza ) conv[0][0]']
tion)
conv3_block12_1_r (None, 19, 19, 128 0 ['conv3_block12_1_ Y
elu (Activation) ) bn[0][0]']
conv3_block12_2_c (None, 19, 19, 32) 3686 ['conv3_block12_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv3_block12_con (None, 19, 19, 512 0 ['conv3_block11_co Y
cat (Concatenate) ) ncat[0][0]',
'conv3_block12_2_
conv[0][0]']
pool3_bn (BatchNo (None, 19, 19, 512 2048 ['conv3_block12_co Y
rmalization) ) ncat[0][0]']
pool3_relu (Activ (None, 19, 19, 512 0 ['pool3_bn[0][0]'] Y
ation) )
pool3_conv (Conv2 (None, 19, 19, 256 1310 ['pool3_relu[0][0] Y
D) ) 72 ']
pool3_pool (Avera (None, 9, 9, 256) 0 ['pool3_conv[0][0] Y
gePooling2D) ']
conv4_block1_0_bn (None, 9, 9, 256) 1024 ['pool3_pool[0][0] Y
(BatchNormalizat ']
ion)
conv4_block1_0_re (None, 9, 9, 256) 0 ['conv4_block1_0_b Y
lu (Activation) n[0][0]']
conv4_block1_1_co (None, 9, 9, 128) 3276 ['conv4_block1_0_r Y
nv (Conv2D) 8 elu[0][0]']
conv4_block1_1_bn (None, 9, 9, 128) 512 ['conv4_block1_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block1_1_re (None, 9, 9, 128) 0 ['conv4_block1_1_b Y
lu (Activation) n[0][0]']
conv4_block1_2_co (None, 9, 9, 32) 3686 ['conv4_block1_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block1_conc (None, 9, 9, 288) 0 ['pool3_pool[0][0] Y
at (Concatenate) ',
'conv4_block1_2_c
onv[0][0]']
conv4_block2_0_bn (None, 9, 9, 288) 1152 ['conv4_block1_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block2_0_re (None, 9, 9, 288) 0 ['conv4_block2_0_b Y
lu (Activation) n[0][0]']
conv4_block2_1_co (None, 9, 9, 128) 3686 ['conv4_block2_0_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block2_1_bn (None, 9, 9, 128) 512 ['conv4_block2_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block2_1_re (None, 9, 9, 128) 0 ['conv4_block2_1_b Y
lu (Activation) n[0][0]']
conv4_block2_2_co (None, 9, 9, 32) 3686 ['conv4_block2_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block2_conc (None, 9, 9, 320) 0 ['conv4_block1_con Y
at (Concatenate) cat[0][0]',
'conv4_block2_2_c
onv[0][0]']
conv4_block3_0_bn (None, 9, 9, 320) 1280 ['conv4_block2_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block3_0_re (None, 9, 9, 320) 0 ['conv4_block3_0_b Y
lu (Activation) n[0][0]']
conv4_block3_1_co (None, 9, 9, 128) 4096 ['conv4_block3_0_r Y
nv (Conv2D) 0 elu[0][0]']
conv4_block3_1_bn (None, 9, 9, 128) 512 ['conv4_block3_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block3_1_re (None, 9, 9, 128) 0 ['conv4_block3_1_b Y
lu (Activation) n[0][0]']
conv4_block3_2_co (None, 9, 9, 32) 3686 ['conv4_block3_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block3_conc (None, 9, 9, 352) 0 ['conv4_block2_con Y
at (Concatenate) cat[0][0]',
'conv4_block3_2_c
onv[0][0]']
conv4_block4_0_bn (None, 9, 9, 352) 1408 ['conv4_block3_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block4_0_re (None, 9, 9, 352) 0 ['conv4_block4_0_b Y
lu (Activation) n[0][0]']
conv4_block4_1_co (None, 9, 9, 128) 4505 ['conv4_block4_0_r Y
nv (Conv2D) 6 elu[0][0]']
conv4_block4_1_bn (None, 9, 9, 128) 512 ['conv4_block4_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block4_1_re (None, 9, 9, 128) 0 ['conv4_block4_1_b Y
lu (Activation) n[0][0]']
conv4_block4_2_co (None, 9, 9, 32) 3686 ['conv4_block4_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block4_conc (None, 9, 9, 384) 0 ['conv4_block3_con Y
at (Concatenate) cat[0][0]',
'conv4_block4_2_c
onv[0][0]']
conv4_block5_0_bn (None, 9, 9, 384) 1536 ['conv4_block4_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block5_0_re (None, 9, 9, 384) 0 ['conv4_block5_0_b Y
lu (Activation) n[0][0]']
conv4_block5_1_co (None, 9, 9, 128) 4915 ['conv4_block5_0_r Y
nv (Conv2D) 2 elu[0][0]']
conv4_block5_1_bn (None, 9, 9, 128) 512 ['conv4_block5_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block5_1_re (None, 9, 9, 128) 0 ['conv4_block5_1_b Y
lu (Activation) n[0][0]']
conv4_block5_2_co (None, 9, 9, 32) 3686 ['conv4_block5_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block5_conc (None, 9, 9, 416) 0 ['conv4_block4_con Y
at (Concatenate) cat[0][0]',
'conv4_block5_2_c
onv[0][0]']
conv4_block6_0_bn (None, 9, 9, 416) 1664 ['conv4_block5_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block6_0_re (None, 9, 9, 416) 0 ['conv4_block6_0_b Y
lu (Activation) n[0][0]']
conv4_block6_1_co (None, 9, 9, 128) 5324 ['conv4_block6_0_r Y
nv (Conv2D) 8 elu[0][0]']
conv4_block6_1_bn (None, 9, 9, 128) 512 ['conv4_block6_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block6_1_re (None, 9, 9, 128) 0 ['conv4_block6_1_b Y
lu (Activation) n[0][0]']
conv4_block6_2_co (None, 9, 9, 32) 3686 ['conv4_block6_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block6_conc (None, 9, 9, 448) 0 ['conv4_block5_con Y
at (Concatenate) cat[0][0]',
'conv4_block6_2_c
onv[0][0]']
conv4_block7_0_bn (None, 9, 9, 448) 1792 ['conv4_block6_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block7_0_re (None, 9, 9, 448) 0 ['conv4_block7_0_b Y
lu (Activation) n[0][0]']
conv4_block7_1_co (None, 9, 9, 128) 5734 ['conv4_block7_0_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block7_1_bn (None, 9, 9, 128) 512 ['conv4_block7_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block7_1_re (None, 9, 9, 128) 0 ['conv4_block7_1_b Y
lu (Activation) n[0][0]']
conv4_block7_2_co (None, 9, 9, 32) 3686 ['conv4_block7_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block7_conc (None, 9, 9, 480) 0 ['conv4_block6_con Y
at (Concatenate) cat[0][0]',
'conv4_block7_2_c
onv[0][0]']
conv4_block8_0_bn (None, 9, 9, 480) 1920 ['conv4_block7_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block8_0_re (None, 9, 9, 480) 0 ['conv4_block8_0_b Y
lu (Activation) n[0][0]']
conv4_block8_1_co (None, 9, 9, 128) 6144 ['conv4_block8_0_r Y
nv (Conv2D) 0 elu[0][0]']
conv4_block8_1_bn (None, 9, 9, 128) 512 ['conv4_block8_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block8_1_re (None, 9, 9, 128) 0 ['conv4_block8_1_b Y
lu (Activation) n[0][0]']
conv4_block8_2_co (None, 9, 9, 32) 3686 ['conv4_block8_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block8_conc (None, 9, 9, 512) 0 ['conv4_block7_con Y
at (Concatenate) cat[0][0]',
'conv4_block8_2_c
onv[0][0]']
conv4_block9_0_bn (None, 9, 9, 512) 2048 ['conv4_block8_con Y
(BatchNormalizat cat[0][0]']
ion)
conv4_block9_0_re (None, 9, 9, 512) 0 ['conv4_block9_0_b Y
lu (Activation) n[0][0]']
conv4_block9_1_co (None, 9, 9, 128) 6553 ['conv4_block9_0_r Y
nv (Conv2D) 6 elu[0][0]']
conv4_block9_1_bn (None, 9, 9, 128) 512 ['conv4_block9_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv4_block9_1_re (None, 9, 9, 128) 0 ['conv4_block9_1_b Y
lu (Activation) n[0][0]']
conv4_block9_2_co (None, 9, 9, 32) 3686 ['conv4_block9_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv4_block9_conc (None, 9, 9, 544) 0 ['conv4_block8_con Y
at (Concatenate) cat[0][0]',
'conv4_block9_2_c
onv[0][0]']
conv4_block10_0_b (None, 9, 9, 544) 2176 ['conv4_block9_con Y
n (BatchNormaliza cat[0][0]']
tion)
conv4_block10_0_r (None, 9, 9, 544) 0 ['conv4_block10_0_ Y
elu (Activation) bn[0][0]']
conv4_block10_1_c (None, 9, 9, 128) 6963 ['conv4_block10_0_ Y
onv (Conv2D) 2 relu[0][0]']
conv4_block10_1_b (None, 9, 9, 128) 512 ['conv4_block10_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block10_1_r (None, 9, 9, 128) 0 ['conv4_block10_1_ Y
elu (Activation) bn[0][0]']
conv4_block10_2_c (None, 9, 9, 32) 3686 ['conv4_block10_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block10_con (None, 9, 9, 576) 0 ['conv4_block9_con Y
cat (Concatenate) cat[0][0]',
'conv4_block10_2_
conv[0][0]']
conv4_block11_0_b (None, 9, 9, 576) 2304 ['conv4_block10_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block11_0_r (None, 9, 9, 576) 0 ['conv4_block11_0_ Y
elu (Activation) bn[0][0]']
conv4_block11_1_c (None, 9, 9, 128) 7372 ['conv4_block11_0_ Y
onv (Conv2D) 8 relu[0][0]']
conv4_block11_1_b (None, 9, 9, 128) 512 ['conv4_block11_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block11_1_r (None, 9, 9, 128) 0 ['conv4_block11_1_ Y
elu (Activation) bn[0][0]']
conv4_block11_2_c (None, 9, 9, 32) 3686 ['conv4_block11_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block11_con (None, 9, 9, 608) 0 ['conv4_block10_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block11_2_
conv[0][0]']
conv4_block12_0_b (None, 9, 9, 608) 2432 ['conv4_block11_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block12_0_r (None, 9, 9, 608) 0 ['conv4_block12_0_ Y
elu (Activation) bn[0][0]']
conv4_block12_1_c (None, 9, 9, 128) 7782 ['conv4_block12_0_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block12_1_b (None, 9, 9, 128) 512 ['conv4_block12_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block12_1_r (None, 9, 9, 128) 0 ['conv4_block12_1_ Y
elu (Activation) bn[0][0]']
conv4_block12_2_c (None, 9, 9, 32) 3686 ['conv4_block12_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block12_con (None, 9, 9, 640) 0 ['conv4_block11_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block12_2_
conv[0][0]']
conv4_block13_0_b (None, 9, 9, 640) 2560 ['conv4_block12_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block13_0_r (None, 9, 9, 640) 0 ['conv4_block13_0_ Y
elu (Activation) bn[0][0]']
conv4_block13_1_c (None, 9, 9, 128) 8192 ['conv4_block13_0_ Y
onv (Conv2D) 0 relu[0][0]']
conv4_block13_1_b (None, 9, 9, 128) 512 ['conv4_block13_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block13_1_r (None, 9, 9, 128) 0 ['conv4_block13_1_ Y
elu (Activation) bn[0][0]']
conv4_block13_2_c (None, 9, 9, 32) 3686 ['conv4_block13_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block13_con (None, 9, 9, 672) 0 ['conv4_block12_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block13_2_
conv[0][0]']
conv4_block14_0_b (None, 9, 9, 672) 2688 ['conv4_block13_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block14_0_r (None, 9, 9, 672) 0 ['conv4_block14_0_ Y
elu (Activation) bn[0][0]']
conv4_block14_1_c (None, 9, 9, 128) 8601 ['conv4_block14_0_ Y
onv (Conv2D) 6 relu[0][0]']
conv4_block14_1_b (None, 9, 9, 128) 512 ['conv4_block14_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block14_1_r (None, 9, 9, 128) 0 ['conv4_block14_1_ Y
elu (Activation) bn[0][0]']
conv4_block14_2_c (None, 9, 9, 32) 3686 ['conv4_block14_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block14_con (None, 9, 9, 704) 0 ['conv4_block13_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block14_2_
conv[0][0]']
conv4_block15_0_b (None, 9, 9, 704) 2816 ['conv4_block14_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block15_0_r (None, 9, 9, 704) 0 ['conv4_block15_0_ Y
elu (Activation) bn[0][0]']
conv4_block15_1_c (None, 9, 9, 128) 9011 ['conv4_block15_0_ Y
onv (Conv2D) 2 relu[0][0]']
conv4_block15_1_b (None, 9, 9, 128) 512 ['conv4_block15_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block15_1_r (None, 9, 9, 128) 0 ['conv4_block15_1_ Y
elu (Activation) bn[0][0]']
conv4_block15_2_c (None, 9, 9, 32) 3686 ['conv4_block15_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block15_con (None, 9, 9, 736) 0 ['conv4_block14_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block15_2_
conv[0][0]']
conv4_block16_0_b (None, 9, 9, 736) 2944 ['conv4_block15_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block16_0_r (None, 9, 9, 736) 0 ['conv4_block16_0_ Y
elu (Activation) bn[0][0]']
conv4_block16_1_c (None, 9, 9, 128) 9420 ['conv4_block16_0_ Y
onv (Conv2D) 8 relu[0][0]']
conv4_block16_1_b (None, 9, 9, 128) 512 ['conv4_block16_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block16_1_r (None, 9, 9, 128) 0 ['conv4_block16_1_ Y
elu (Activation) bn[0][0]']
conv4_block16_2_c (None, 9, 9, 32) 3686 ['conv4_block16_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block16_con (None, 9, 9, 768) 0 ['conv4_block15_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block16_2_
conv[0][0]']
conv4_block17_0_b (None, 9, 9, 768) 3072 ['conv4_block16_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block17_0_r (None, 9, 9, 768) 0 ['conv4_block17_0_ Y
elu (Activation) bn[0][0]']
conv4_block17_1_c (None, 9, 9, 128) 9830 ['conv4_block17_0_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block17_1_b (None, 9, 9, 128) 512 ['conv4_block17_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block17_1_r (None, 9, 9, 128) 0 ['conv4_block17_1_ Y
elu (Activation) bn[0][0]']
conv4_block17_2_c (None, 9, 9, 32) 3686 ['conv4_block17_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block17_con (None, 9, 9, 800) 0 ['conv4_block16_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block17_2_
conv[0][0]']
conv4_block18_0_b (None, 9, 9, 800) 3200 ['conv4_block17_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block18_0_r (None, 9, 9, 800) 0 ['conv4_block18_0_ Y
elu (Activation) bn[0][0]']
conv4_block18_1_c (None, 9, 9, 128) 1024 ['conv4_block18_0_ Y
onv (Conv2D) 00 relu[0][0]']
conv4_block18_1_b (None, 9, 9, 128) 512 ['conv4_block18_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block18_1_r (None, 9, 9, 128) 0 ['conv4_block18_1_ Y
elu (Activation) bn[0][0]']
conv4_block18_2_c (None, 9, 9, 32) 3686 ['conv4_block18_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block18_con (None, 9, 9, 832) 0 ['conv4_block17_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block18_2_
conv[0][0]']
conv4_block19_0_b (None, 9, 9, 832) 3328 ['conv4_block18_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block19_0_r (None, 9, 9, 832) 0 ['conv4_block19_0_ Y
elu (Activation) bn[0][0]']
conv4_block19_1_c (None, 9, 9, 128) 1064 ['conv4_block19_0_ Y
onv (Conv2D) 96 relu[0][0]']
conv4_block19_1_b (None, 9, 9, 128) 512 ['conv4_block19_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block19_1_r (None, 9, 9, 128) 0 ['conv4_block19_1_ Y
elu (Activation) bn[0][0]']
conv4_block19_2_c (None, 9, 9, 32) 3686 ['conv4_block19_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block19_con (None, 9, 9, 864) 0 ['conv4_block18_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block19_2_
conv[0][0]']
conv4_block20_0_b (None, 9, 9, 864) 3456 ['conv4_block19_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block20_0_r (None, 9, 9, 864) 0 ['conv4_block20_0_ Y
elu (Activation) bn[0][0]']
conv4_block20_1_c (None, 9, 9, 128) 1105 ['conv4_block20_0_ Y
onv (Conv2D) 92 relu[0][0]']
conv4_block20_1_b (None, 9, 9, 128) 512 ['conv4_block20_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block20_1_r (None, 9, 9, 128) 0 ['conv4_block20_1_ Y
elu (Activation) bn[0][0]']
conv4_block20_2_c (None, 9, 9, 32) 3686 ['conv4_block20_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block20_con (None, 9, 9, 896) 0 ['conv4_block19_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block20_2_
conv[0][0]']
conv4_block21_0_b (None, 9, 9, 896) 3584 ['conv4_block20_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block21_0_r (None, 9, 9, 896) 0 ['conv4_block21_0_ Y
elu (Activation) bn[0][0]']
conv4_block21_1_c (None, 9, 9, 128) 1146 ['conv4_block21_0_ Y
onv (Conv2D) 88 relu[0][0]']
conv4_block21_1_b (None, 9, 9, 128) 512 ['conv4_block21_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block21_1_r (None, 9, 9, 128) 0 ['conv4_block21_1_ Y
elu (Activation) bn[0][0]']
conv4_block21_2_c (None, 9, 9, 32) 3686 ['conv4_block21_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block21_con (None, 9, 9, 928) 0 ['conv4_block20_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block21_2_
conv[0][0]']
conv4_block22_0_b (None, 9, 9, 928) 3712 ['conv4_block21_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block22_0_r (None, 9, 9, 928) 0 ['conv4_block22_0_ Y
elu (Activation) bn[0][0]']
conv4_block22_1_c (None, 9, 9, 128) 1187 ['conv4_block22_0_ Y
onv (Conv2D) 84 relu[0][0]']
conv4_block22_1_b (None, 9, 9, 128) 512 ['conv4_block22_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block22_1_r (None, 9, 9, 128) 0 ['conv4_block22_1_ Y
elu (Activation) bn[0][0]']
conv4_block22_2_c (None, 9, 9, 32) 3686 ['conv4_block22_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block22_con (None, 9, 9, 960) 0 ['conv4_block21_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block22_2_
conv[0][0]']
conv4_block23_0_b (None, 9, 9, 960) 3840 ['conv4_block22_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block23_0_r (None, 9, 9, 960) 0 ['conv4_block23_0_ Y
elu (Activation) bn[0][0]']
conv4_block23_1_c (None, 9, 9, 128) 1228 ['conv4_block23_0_ Y
onv (Conv2D) 80 relu[0][0]']
conv4_block23_1_b (None, 9, 9, 128) 512 ['conv4_block23_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block23_1_r (None, 9, 9, 128) 0 ['conv4_block23_1_ Y
elu (Activation) bn[0][0]']
conv4_block23_2_c (None, 9, 9, 32) 3686 ['conv4_block23_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block23_con (None, 9, 9, 992) 0 ['conv4_block22_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block23_2_
conv[0][0]']
conv4_block24_0_b (None, 9, 9, 992) 3968 ['conv4_block23_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv4_block24_0_r (None, 9, 9, 992) 0 ['conv4_block24_0_ Y
elu (Activation) bn[0][0]']
conv4_block24_1_c (None, 9, 9, 128) 1269 ['conv4_block24_0_ Y
onv (Conv2D) 76 relu[0][0]']
conv4_block24_1_b (None, 9, 9, 128) 512 ['conv4_block24_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv4_block24_1_r (None, 9, 9, 128) 0 ['conv4_block24_1_ Y
elu (Activation) bn[0][0]']
conv4_block24_2_c (None, 9, 9, 32) 3686 ['conv4_block24_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv4_block24_con (None, 9, 9, 1024) 0 ['conv4_block23_co Y
cat (Concatenate) ncat[0][0]',
'conv4_block24_2_
conv[0][0]']
pool4_bn (BatchNo (None, 9, 9, 1024) 4096 ['conv4_block24_co Y
rmalization) ncat[0][0]']
pool4_relu (Activ (None, 9, 9, 1024) 0 ['pool4_bn[0][0]'] Y
ation)
pool4_conv (Conv2 (None, 9, 9, 512) 5242 ['pool4_relu[0][0] Y
D) 88 ']
pool4_pool (Avera (None, 4, 4, 512) 0 ['pool4_conv[0][0] Y
gePooling2D) ']
conv5_block1_0_bn (None, 4, 4, 512) 2048 ['pool4_pool[0][0] Y
(BatchNormalizat ']
ion)
conv5_block1_0_re (None, 4, 4, 512) 0 ['conv5_block1_0_b Y
lu (Activation) n[0][0]']
conv5_block1_1_co (None, 4, 4, 128) 6553 ['conv5_block1_0_r Y
nv (Conv2D) 6 elu[0][0]']
conv5_block1_1_bn (None, 4, 4, 128) 512 ['conv5_block1_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block1_1_re (None, 4, 4, 128) 0 ['conv5_block1_1_b Y
lu (Activation) n[0][0]']
conv5_block1_2_co (None, 4, 4, 32) 3686 ['conv5_block1_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block1_conc (None, 4, 4, 544) 0 ['pool4_pool[0][0] Y
at (Concatenate) ',
'conv5_block1_2_c
onv[0][0]']
conv5_block2_0_bn (None, 4, 4, 544) 2176 ['conv5_block1_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block2_0_re (None, 4, 4, 544) 0 ['conv5_block2_0_b Y
lu (Activation) n[0][0]']
conv5_block2_1_co (None, 4, 4, 128) 6963 ['conv5_block2_0_r Y
nv (Conv2D) 2 elu[0][0]']
conv5_block2_1_bn (None, 4, 4, 128) 512 ['conv5_block2_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block2_1_re (None, 4, 4, 128) 0 ['conv5_block2_1_b Y
lu (Activation) n[0][0]']
conv5_block2_2_co (None, 4, 4, 32) 3686 ['conv5_block2_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block2_conc (None, 4, 4, 576) 0 ['conv5_block1_con Y
at (Concatenate) cat[0][0]',
'conv5_block2_2_c
onv[0][0]']
conv5_block3_0_bn (None, 4, 4, 576) 2304 ['conv5_block2_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block3_0_re (None, 4, 4, 576) 0 ['conv5_block3_0_b Y
lu (Activation) n[0][0]']
conv5_block3_1_co (None, 4, 4, 128) 7372 ['conv5_block3_0_r Y
nv (Conv2D) 8 elu[0][0]']
conv5_block3_1_bn (None, 4, 4, 128) 512 ['conv5_block3_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block3_1_re (None, 4, 4, 128) 0 ['conv5_block3_1_b Y
lu (Activation) n[0][0]']
conv5_block3_2_co (None, 4, 4, 32) 3686 ['conv5_block3_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block3_conc (None, 4, 4, 608) 0 ['conv5_block2_con Y
at (Concatenate) cat[0][0]',
'conv5_block3_2_c
onv[0][0]']
conv5_block4_0_bn (None, 4, 4, 608) 2432 ['conv5_block3_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block4_0_re (None, 4, 4, 608) 0 ['conv5_block4_0_b Y
lu (Activation) n[0][0]']
conv5_block4_1_co (None, 4, 4, 128) 7782 ['conv5_block4_0_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block4_1_bn (None, 4, 4, 128) 512 ['conv5_block4_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block4_1_re (None, 4, 4, 128) 0 ['conv5_block4_1_b Y
lu (Activation) n[0][0]']
conv5_block4_2_co (None, 4, 4, 32) 3686 ['conv5_block4_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block4_conc (None, 4, 4, 640) 0 ['conv5_block3_con Y
at (Concatenate) cat[0][0]',
'conv5_block4_2_c
onv[0][0]']
conv5_block5_0_bn (None, 4, 4, 640) 2560 ['conv5_block4_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block5_0_re (None, 4, 4, 640) 0 ['conv5_block5_0_b Y
lu (Activation) n[0][0]']
conv5_block5_1_co (None, 4, 4, 128) 8192 ['conv5_block5_0_r Y
nv (Conv2D) 0 elu[0][0]']
conv5_block5_1_bn (None, 4, 4, 128) 512 ['conv5_block5_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block5_1_re (None, 4, 4, 128) 0 ['conv5_block5_1_b Y
lu (Activation) n[0][0]']
conv5_block5_2_co (None, 4, 4, 32) 3686 ['conv5_block5_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block5_conc (None, 4, 4, 672) 0 ['conv5_block4_con Y
at (Concatenate) cat[0][0]',
'conv5_block5_2_c
onv[0][0]']
conv5_block6_0_bn (None, 4, 4, 672) 2688 ['conv5_block5_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block6_0_re (None, 4, 4, 672) 0 ['conv5_block6_0_b Y
lu (Activation) n[0][0]']
conv5_block6_1_co (None, 4, 4, 128) 8601 ['conv5_block6_0_r Y
nv (Conv2D) 6 elu[0][0]']
conv5_block6_1_bn (None, 4, 4, 128) 512 ['conv5_block6_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block6_1_re (None, 4, 4, 128) 0 ['conv5_block6_1_b Y
lu (Activation) n[0][0]']
conv5_block6_2_co (None, 4, 4, 32) 3686 ['conv5_block6_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block6_conc (None, 4, 4, 704) 0 ['conv5_block5_con Y
at (Concatenate) cat[0][0]',
'conv5_block6_2_c
onv[0][0]']
conv5_block7_0_bn (None, 4, 4, 704) 2816 ['conv5_block6_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block7_0_re (None, 4, 4, 704) 0 ['conv5_block7_0_b Y
lu (Activation) n[0][0]']
conv5_block7_1_co (None, 4, 4, 128) 9011 ['conv5_block7_0_r Y
nv (Conv2D) 2 elu[0][0]']
conv5_block7_1_bn (None, 4, 4, 128) 512 ['conv5_block7_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block7_1_re (None, 4, 4, 128) 0 ['conv5_block7_1_b Y
lu (Activation) n[0][0]']
conv5_block7_2_co (None, 4, 4, 32) 3686 ['conv5_block7_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block7_conc (None, 4, 4, 736) 0 ['conv5_block6_con Y
at (Concatenate) cat[0][0]',
'conv5_block7_2_c
onv[0][0]']
conv5_block8_0_bn (None, 4, 4, 736) 2944 ['conv5_block7_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block8_0_re (None, 4, 4, 736) 0 ['conv5_block8_0_b Y
lu (Activation) n[0][0]']
conv5_block8_1_co (None, 4, 4, 128) 9420 ['conv5_block8_0_r Y
nv (Conv2D) 8 elu[0][0]']
conv5_block8_1_bn (None, 4, 4, 128) 512 ['conv5_block8_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block8_1_re (None, 4, 4, 128) 0 ['conv5_block8_1_b Y
lu (Activation) n[0][0]']
conv5_block8_2_co (None, 4, 4, 32) 3686 ['conv5_block8_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block8_conc (None, 4, 4, 768) 0 ['conv5_block7_con Y
at (Concatenate) cat[0][0]',
'conv5_block8_2_c
onv[0][0]']
conv5_block9_0_bn (None, 4, 4, 768) 3072 ['conv5_block8_con Y
(BatchNormalizat cat[0][0]']
ion)
conv5_block9_0_re (None, 4, 4, 768) 0 ['conv5_block9_0_b Y
lu (Activation) n[0][0]']
conv5_block9_1_co (None, 4, 4, 128) 9830 ['conv5_block9_0_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block9_1_bn (None, 4, 4, 128) 512 ['conv5_block9_1_c Y
(BatchNormalizat onv[0][0]']
ion)
conv5_block9_1_re (None, 4, 4, 128) 0 ['conv5_block9_1_b Y
lu (Activation) n[0][0]']
conv5_block9_2_co (None, 4, 4, 32) 3686 ['conv5_block9_1_r Y
nv (Conv2D) 4 elu[0][0]']
conv5_block9_conc (None, 4, 4, 800) 0 ['conv5_block8_con Y
at (Concatenate) cat[0][0]',
'conv5_block9_2_c
onv[0][0]']
conv5_block10_0_b (None, 4, 4, 800) 3200 ['conv5_block9_con Y
n (BatchNormaliza cat[0][0]']
tion)
conv5_block10_0_r (None, 4, 4, 800) 0 ['conv5_block10_0_ Y
elu (Activation) bn[0][0]']
conv5_block10_1_c (None, 4, 4, 128) 1024 ['conv5_block10_0_ Y
onv (Conv2D) 00 relu[0][0]']
conv5_block10_1_b (None, 4, 4, 128) 512 ['conv5_block10_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block10_1_r (None, 4, 4, 128) 0 ['conv5_block10_1_ Y
elu (Activation) bn[0][0]']
conv5_block10_2_c (None, 4, 4, 32) 3686 ['conv5_block10_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block10_con (None, 4, 4, 832) 0 ['conv5_block9_con Y
cat (Concatenate) cat[0][0]',
'conv5_block10_2_
conv[0][0]']
conv5_block11_0_b (None, 4, 4, 832) 3328 ['conv5_block10_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block11_0_r (None, 4, 4, 832) 0 ['conv5_block11_0_ Y
elu (Activation) bn[0][0]']
conv5_block11_1_c (None, 4, 4, 128) 1064 ['conv5_block11_0_ Y
onv (Conv2D) 96 relu[0][0]']
conv5_block11_1_b (None, 4, 4, 128) 512 ['conv5_block11_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block11_1_r (None, 4, 4, 128) 0 ['conv5_block11_1_ Y
elu (Activation) bn[0][0]']
conv5_block11_2_c (None, 4, 4, 32) 3686 ['conv5_block11_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block11_con (None, 4, 4, 864) 0 ['conv5_block10_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block11_2_
conv[0][0]']
conv5_block12_0_b (None, 4, 4, 864) 3456 ['conv5_block11_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block12_0_r (None, 4, 4, 864) 0 ['conv5_block12_0_ Y
elu (Activation) bn[0][0]']
conv5_block12_1_c (None, 4, 4, 128) 1105 ['conv5_block12_0_ Y
onv (Conv2D) 92 relu[0][0]']
conv5_block12_1_b (None, 4, 4, 128) 512 ['conv5_block12_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block12_1_r (None, 4, 4, 128) 0 ['conv5_block12_1_ Y
elu (Activation) bn[0][0]']
conv5_block12_2_c (None, 4, 4, 32) 3686 ['conv5_block12_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block12_con (None, 4, 4, 896) 0 ['conv5_block11_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block12_2_
conv[0][0]']
conv5_block13_0_b (None, 4, 4, 896) 3584 ['conv5_block12_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block13_0_r (None, 4, 4, 896) 0 ['conv5_block13_0_ Y
elu (Activation) bn[0][0]']
conv5_block13_1_c (None, 4, 4, 128) 1146 ['conv5_block13_0_ Y
onv (Conv2D) 88 relu[0][0]']
conv5_block13_1_b (None, 4, 4, 128) 512 ['conv5_block13_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block13_1_r (None, 4, 4, 128) 0 ['conv5_block13_1_ Y
elu (Activation) bn[0][0]']
conv5_block13_2_c (None, 4, 4, 32) 3686 ['conv5_block13_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block13_con (None, 4, 4, 928) 0 ['conv5_block12_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block13_2_
conv[0][0]']
conv5_block14_0_b (None, 4, 4, 928) 3712 ['conv5_block13_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block14_0_r (None, 4, 4, 928) 0 ['conv5_block14_0_ Y
elu (Activation) bn[0][0]']
conv5_block14_1_c (None, 4, 4, 128) 1187 ['conv5_block14_0_ Y
onv (Conv2D) 84 relu[0][0]']
conv5_block14_1_b (None, 4, 4, 128) 512 ['conv5_block14_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block14_1_r (None, 4, 4, 128) 0 ['conv5_block14_1_ Y
elu (Activation) bn[0][0]']
conv5_block14_2_c (None, 4, 4, 32) 3686 ['conv5_block14_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block14_con (None, 4, 4, 960) 0 ['conv5_block13_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block14_2_
conv[0][0]']
conv5_block15_0_b (None, 4, 4, 960) 3840 ['conv5_block14_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block15_0_r (None, 4, 4, 960) 0 ['conv5_block15_0_ Y
elu (Activation) bn[0][0]']
conv5_block15_1_c (None, 4, 4, 128) 1228 ['conv5_block15_0_ Y
onv (Conv2D) 80 relu[0][0]']
conv5_block15_1_b (None, 4, 4, 128) 512 ['conv5_block15_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block15_1_r (None, 4, 4, 128) 0 ['conv5_block15_1_ Y
elu (Activation) bn[0][0]']
conv5_block15_2_c (None, 4, 4, 32) 3686 ['conv5_block15_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block15_con (None, 4, 4, 992) 0 ['conv5_block14_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block15_2_
conv[0][0]']
conv5_block16_0_b (None, 4, 4, 992) 3968 ['conv5_block15_co Y
n (BatchNormaliza ncat[0][0]']
tion)
conv5_block16_0_r (None, 4, 4, 992) 0 ['conv5_block16_0_ Y
elu (Activation) bn[0][0]']
conv5_block16_1_c (None, 4, 4, 128) 1269 ['conv5_block16_0_ Y
onv (Conv2D) 76 relu[0][0]']
conv5_block16_1_b (None, 4, 4, 128) 512 ['conv5_block16_1_ Y
n (BatchNormaliza conv[0][0]']
tion)
conv5_block16_1_r (None, 4, 4, 128) 0 ['conv5_block16_1_ Y
elu (Activation) bn[0][0]']
conv5_block16_2_c (None, 4, 4, 32) 3686 ['conv5_block16_1_ Y
onv (Conv2D) 4 relu[0][0]']
conv5_block16_con (None, 4, 4, 1024) 0 ['conv5_block15_co Y
cat (Concatenate) ncat[0][0]',
'conv5_block16_2_
conv[0][0]']
bn (BatchNormaliz (None, 4, 4, 1024) 4096 ['conv5_block16_co Y
ation) ncat[0][0]']
relu (Activation) (None, 4, 4, 1024) 0 ['bn[0][0]'] Y
================================================================================
Total params: 7037504 (26.85 MB)
Trainable params: 6953856 (26.53 MB)
Non-trainable params: 83648 (326.75 KB)
________________________________________________________________________________
Model: "sequential_19"
________________________________________________________________________________
Layer (type) Output Shape Param # Trainable
================================================================================
densenet121 (Functional) (None, 4, 4, 1024) 7037504 N
flatten_19 (Flatten) (None, 16384) 0 Y
dense_82 (Dense) (None, 128) 2097280 Y
dropout_42 (Dropout) (None, 128) 0 Y
dense_81 (Dense) (None, 128) 16512 Y
dropout_41 (Dropout) (None, 128) 0 Y
dense_80 (Dense) (None, 32) 4128 Y
dense_79 (Dense) (None, 14) 462 Y
================================================================================
Total params: 9155886 (34.93 MB)
Trainable params: 2118382 (8.08 MB)
Non-trainable params: 7037504 (26.85 MB)
________________________________________________________________________________
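The parameter counts printed above for the classification head can be verified by hand: a `Dense` layer has `inputs * units` weights plus `units` biases. The short sketch below recomputes the counts for the four dense layers of `sequential_19` (the layer sizes are taken directly from the summary; `16384` is the flattened `4 * 4 * 1024` DenseNet output):

```python
# Parameter count of a Dense layer: weights (inputs * units) + biases (units).
def dense_params(inputs: int, units: int) -> int:
    return inputs * units + units

# The classification head layers, as listed in the summary above.
head = [
    ("dense_82", 16384, 128),  # Flatten output: 4 * 4 * 1024 = 16384
    ("dense_81", 128, 128),
    ("dense_80", 128, 32),
    ("dense_79", 32, 14),      # 14 output classes
]

counts = {name: dense_params(inp, units) for name, inp, units in head}
print(counts)                # {'dense_82': 2097280, 'dense_81': 16512, 'dense_80': 4128, 'dense_79': 462}
print(sum(counts.values()))  # 2118382 — matches "Trainable params" in the summary
```

The sum equals the reported 2,118,382 trainable parameters, confirming that only the head is trained while the 7,037,504 DenseNet121 parameters stay frozen.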
Summary and conclusions
| | loss | accuracy | recall | precision | auc |
|---|---|---|---|---|---|
| model 1 | 2.64 | 0.09 | 0.01 | 0.67 | 0.56 |
| model 2 | 2.40 | 0.24 | 0.02 | 0.43 | 0.71 |
| model 3 | 1.88 | 0.42 | 0.21 | 0.70 | 0.85 |
| model 4 | 1.96 | 0.47 | 0.32 | 0.62 | 0.84 |
| model 5 | 1.27 | 0.66 | 0.57 | 0.83 | 0.93 |
| model 6 | 1.14 | 0.69 | 0.61 | 0.88 | 0.94 |
| model 7 | 1.03 | 0.71 | 0.63 | 0.86 | 0.95 |
Table 2 shows the differences between the various approaches to designing the network architecture. The first models, built from dense layers only, performed worst and overfitted.
Models based on convolutional layers handled the task considerably better, although this improvement came at the cost of much longer training times.
Finally, I used pretrained networks, among which DenseNet performed best. An accuracy of 0.71 is not ideal, but it is a substantial leap over the initial models.